41 research outputs found

    Continuous-time Mean-Variance Portfolio Selection with Stochastic Parameters

    Full text link
    This paper studies a continuous-time market in a stochastic environment where an agent, having specified an investment horizon and a target terminal mean return, seeks to minimize the variance of the return over multiple stocks and a bond. In the model, first proposed by [3], the mean returns of individual assets are explicitly affected by underlying Gaussian economic factors. Using past and present information on the asset prices, a partial-information stochastic optimal control problem with random coefficients is formulated; the information is partial because the economic factors cannot be observed directly. Via dynamic programming theory, the optimal portfolio strategy can be constructed by solving a deterministic forward Riccati-type ordinary differential equation and two linear deterministic backward ordinary differential equations.
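
    The numerical core the abstract points to is the integration of a deterministic Riccati-type ODE. A minimal sketch of that step is below; the scalar form and the constant coefficient functions a(t), b(t), c(t) are illustrative assumptions, not the paper's actual model coefficients.

        # Sketch: integrating a generic scalar Riccati ODE dp/dt = a(t) p^2 + b(t) p + c(t).
        # Coefficients below are placeholders, not the paper's model.
        import numpy as np
        from scipy.integrate import solve_ivp

        def riccati_rhs(t, p, a, b, c):
            return a(t) * p**2 + b(t) * p + c(t)

        # Hypothetical constant coefficients, for illustration only.
        a = lambda t: -0.5
        b = lambda t: 0.1
        c = lambda t: 0.2

        T = 1.0  # investment horizon
        sol = solve_ivp(riccati_rhs, (0.0, T), y0=[1.0], args=(a, b, c))

        # The solution p(t) would then feed the two linear backward ODEs that
        # determine the optimal portfolio strategy; being linear, they can be
        # integrated with the same routine in reversed time.
        print(sol.y[0][-1])  # value of p at the horizon T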

    A New Optimization Algorithm for Single Hidden Layer Feedforward Neural Networks

    Get PDF
    Feedforward neural networks are the most commonly used function approximation techniques in neural networks. By the universal approximation theorem, a single-hidden-layer feedforward neural network (FNN) is sufficient to approximate the desired outputs arbitrarily closely. Some researchers use genetic algorithms (GAs) to search for the globally optimal FNN structure; however, training an FNN with a GA is rather time-consuming. In this paper, we propose a new optimization algorithm for a single-hidden-layer FNN. The method is based on a convex combination algorithm for massaging information in the hidden layer. In effect, this technique explores a continuum idea that combines the classic mutation and crossover strategies of the GA. The proposed method has an advantage over the GA, which requires substantial preprocessing work to break the data down into a sequence of binary codes before learning or mutation can be applied. We also set up a new error function to measure the performance of the FNN and to obtain the optimal choice of the connection weights, so the nonlinear optimization problem can be solved directly. Several computational experiments illustrate that the proposed algorithm has good exploration and exploitation capabilities in its search for the optimal weights of single-hidden-layer FNNs.
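
    The "convex combination" idea can be pictured as replacing the GA's discrete crossover of binary-coded weights with a continuous blend of two candidate weight matrices. The sketch below illustrates that idea under stated assumptions: the network shape, the toy data, and the simple grid search over the blend weight are illustrative, not the paper's actual algorithm.

        # Sketch: blending two candidate hidden-layer weight matrices W1, W2
        # as W(lam) = lam*W1 + (1-lam)*W2, turning GA-style crossover into a
        # one-dimensional continuous search. Details are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def fnn_output(W, v, X):
            # Single hidden layer: tanh hidden units, linear output layer.
            return np.tanh(X @ W) @ v

        def error(W, v, X, y):
            # Squared-error criterion used to compare candidate blends.
            return np.mean((fnn_output(W, v, X) - y) ** 2)

        # Toy data: learn y = sin(x) on [-pi, pi].
        X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
        y = np.sin(X).ravel()

        n_hidden = 10
        v = rng.normal(size=n_hidden)        # output weights (held fixed here)
        W1 = rng.normal(size=(1, n_hidden))  # two candidate hidden layers
        W2 = rng.normal(size=(1, n_hidden))

        lams = np.linspace(0.0, 1.0, 101)
        errs = [error(l * W1 + (1 - l) * W2, v, X, y) for l in lams]
        best = lams[int(np.argmin(errs))]
        print(f"best lambda = {best:.2f}, error = {min(errs):.4f}")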

    A new design method for broadband microphone arrays for speech input in automobiles

    Full text link

    On a real-time blind signal separation noise reduction system

    No full text
    Blind signal separation (BSS) has been studied extensively as a way to tackle the cocktail party problem. It exploits the spatial diversity of the source mixtures received by different sensors. Using a kurtosis measure, it is possible to select the source of interest from among the separated BSS outputs. Further noise cancellation can be achieved by adding an adaptive noise canceller (ANC) as post-processing. However, the computation is rather intensive, and an online implementation of the overall system is not straightforward. This paper aims to fill that gap by developing an FPGA hardware architecture that implements the system. Subband processing is explored, and the detailed functional operations are profiled carefully. The final proposed FPGA system is able to handle signals at sample rates above 20,000 samples per second.
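
    The kurtosis-based selection step mentioned in the abstract can be sketched as follows: among the separated output channels, the one with the largest excess kurtosis is taken as the source of interest, since speech is typically super-Gaussian while diffuse noise is closer to Gaussian. The synthetic "outputs" array below stands in for real separated signals and is an assumption for illustration.

        # Sketch: selecting the source of interest among BSS outputs by kurtosis.
        import numpy as np

        def excess_kurtosis(x):
            # kurt(x) = E[(x - mu)^4] / sigma^4 - 3; zero for a Gaussian signal.
            x = x - np.mean(x)
            var = np.mean(x**2)
            return np.mean(x**4) / var**2 - 3.0

        rng = np.random.default_rng(0)
        fs = 20000                              # sample rate from the paper
        noise = rng.normal(size=fs)             # Gaussian -> kurtosis near 0
        speechlike = rng.laplace(size=fs)       # super-Gaussian -> kurtosis > 0
        outputs = np.stack([noise, speechlike]) # stand-in for separated channels

        kurts = [excess_kurtosis(ch) for ch in outputs]
        selected = int(np.argmax(kurts))        # channel passed on to the ANC
        print(kurts, "-> selected channel", selected)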

    Guest editorial: special issue on new trends in multimedia processing

    No full text
